6 research outputs found

    A suboptimal estimator of the sampling jitter variance using the bispectrum

    Full text link
    We consider the problem of estimating the parameters of an irregular sampling process, modeled as a uniform sampling process in which the deviations from the nominal sampling times form a random IID process (jitter). Emphasis is placed on estimating the variance of the jitter from samples of a continuous, band-limited, third-order stationary process. We derive an estimation procedure that uses bispectrum estimates of a process whose bispectrum is known a priori. Derivation of the generalized likelihood ratio in the bispectral domain leads to a statistic with which bispectrum-based maximum likelihood estimation can be performed. We propose a suboptimal estimator and show that it is asymptotically unbiased and consistent. The dependence of the estimator's performance on the data length and the skewness is studied for a specific example. The estimator's variance is compared to the bispectrum-based Cramér-Rao bound (BCRB) and is shown to approach it for sufficiently large data length or skewness. Computer simulations verify the effectiveness of the proposed estimation method for small jitter.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/31464/1/0000386.pd
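    The sampling model in this abstract, nominal times kT perturbed by zero-mean IID jitter, can be sketched in a short simulation. This is a minimal illustration, assuming Gaussian jitter for concreteness; the abstract only requires an IID jitter process with small variance.

```python
import math
import random

def jittered_sample_times(n, T, jitter_std, seed=0):
    """Nominal uniform times k*T perturbed by zero-mean IID jitter.
    Gaussian jitter is an assumption made here for concreteness."""
    rng = random.Random(seed)
    return [k * T + rng.gauss(0.0, jitter_std) for k in range(n)]

T = 1.0
times = jittered_sample_times(20000, T, jitter_std=0.05 * T)

# Sample a band-limited test signal at the jittered instants.
samples = [math.sin(0.3 * t) for t in times]

# The quantity the estimator targets: the variance of the deviations
# from the nominal sampling times.
devs = [t - k * T for k, t in enumerate(times)]
mean_dev = sum(devs) / len(devs)
jitter_var = sum((d - mean_dev) ** 2 for d in devs) / len(devs)
```

    In practice the deviations are not observable; the paper's point is that `jitter_var` can nevertheless be recovered from bispectrum estimates of the samples themselves.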

    Feasibility study of parameter estimation of random sampling jitter using the bispectrum

    Full text link
    An actual sampling process can be modeled as a random process consisting of the regular (uniform) deterministic sampling process plus an error in the sampling times that constitutes zero-mean noise (the jitter). In this paper we discuss the problem of estimating the jitter process. Assuming the jitter process is i.i.d., with standard deviation small compared to the regular sampling interval, we show that the variance of the jitter process can be estimated from the nth-order spectrum of the sampled data, n = 2, 3; i.e., the jitter variance can be extracted from the 2nd-order spectrum or from the 3rd-order spectrum (the bispectrum) of the sampled data, provided the continuous signal spectrum is known. However, when the signal skewness exceeds a certain level, the potential performance of the bispectrum-based estimation is better than that of the spectrum-based estimation. Moreover, the former can also provide jitter variance estimates when the continuous signal spectrum is unknown, while the latter cannot. This suggests that the bispectrum of the sampled data is potentially better for estimating any parameter of the sampling jitter process, once the signal skewness is sufficiently large.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/43577/1/34_2005_Article_BF01183740.pd
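    The comparison above turns on the signal's skewness. A minimal plain-Python sketch of the (biased) sample version of that quantity, the third central moment normalized by the variance to the 3/2 power:

```python
def sample_skewness(x):
    """Biased sample skewness: third standardized moment. The abstract's
    conclusion is that the bispectrum-based estimator wins once this
    quantity is sufficiently large."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    return m3 / m2 ** 1.5

sym = sample_skewness([-1.0, 0.0, 1.0])       # symmetric data
skewed = sample_skewness([0.0, 0.0, 0.0, 1.0])  # right-skewed data
```

    A symmetric sample gives skewness 0, for which the spectrum-based estimator would be the only option; a skewed sample gives a nonzero value, opening the bispectral route.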

    Recursive algorithms for digital communications using the discrete wavelet transform.

    Full text link
    The goal in digital communications is to transmit digital information through a channel efficiently and reliably. To decode the transmitted symbols optimally, it is often necessary to have good estimates of the channel parameters. In this thesis a class of recursive algorithms is developed that performs joint Maximum Likelihood (ML) estimation of the channel parameters and the symbol sequence. The channel parameters are assumed to consist of an unknown fixed complex gain (amplitude and phase) and a time delay. The digital information is transmitted using binary or M-ary phase modulation and received in the presence of additive white Gaussian noise. The algorithms presented in this work are based on a decomposition with respect to an orthonormal wavelet basis, motivated by the fact that the wavelet decomposition retains all the information in the observation while facilitating processing by a recursive algorithm. In addition, the localization properties of the wavelet basis enable local updates of the symbol parameters, resulting in an efficient digital algorithm with low complexity and small delay. In particular, the algorithm has per-iteration complexity that is quadratic in the number of users when applied in a multiuser system, whereas the optimal receiver, which uses the matched filter outputs as a sufficient statistic, has complexity exponential in the number of users. The class of algorithms studied in this thesis applies to both single-user and multiuser systems. First, a coordinate ascent algorithm for a single-user system is developed. While direct maximization of the likelihood function is analytically intractable, the recursive algorithm has simple updates involving polynomial rooting for time-delay estimation, discrete search for symbol estimation, and an analytical solution for gain estimation. The algorithm uses a look-up table for retrieving Fourier series coefficients of the transmitted pulse shape's decomposition with respect to the wavelet basis. Simulations show fast convergence and attainment of estimation bounds. Next, the algorithm is extended to the multiuser case. Two versions have been developed: one using grouped coordinate ascent, the other using the EM algorithm. Simulation results for a two-user system are shown. In addition, the problem of initializing the algorithm is addressed. Finally, an analysis of the single-user algorithm is performed under some reasonable assumptions. It is shown that, in the limit of a large observation time, the algorithm has a fixed point at the true gain and time-delay parameters.
    Ph.D.
    Applied Sciences; Electrical engineering
    University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/130118/2/9712083.pd
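    The two wavelet properties the thesis leans on, that an orthonormal decomposition retains all the information in the observation and that each coefficient depends only on a local stretch of the signal, can be illustrated with the simplest such basis. This is a one-level Haar sketch; the thesis does not specify this particular wavelet.

```python
import math

S = 1.0 / math.sqrt(2.0)

def haar_dwt(x):
    """One level of the orthonormal Haar transform: pairwise scaled sums
    (approximation) and differences (detail). Each coefficient depends on
    only two neighboring samples, which is the localization that enables
    local symbol updates."""
    approx = [S * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    detail = [S * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform; orthonormality gives perfect reconstruction,
    so no information in the observation is lost."""
    x = []
    for a, d in zip(approx, detail):
        x.append(S * (a + d))
        x.append(S * (a - d))
    return x

x = [4.0, 2.0, 5.0, 5.0, -1.0, 3.0]
a, d = haar_dwt(x)
rec = haar_idwt(a, d)
```

    Perfect reconstruction (`rec` equals `x` up to floating-point error) is what justifies running the estimation recursion in the wavelet domain without discarding information.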

    A maximum likelihood digital receiver using coordinate ascent and the discrete wavelet transform

    No full text
    In this paper a Maximum Likelihood (ML) method is presented for joint estimation of amplitude, phase, time delay, and data symbols in a single-user direct-sequence spread-spectrum communication system. Since maximization of the likelihood function is analytically intractable, a novel coordinate ascent algorithm is used to obtain sequential updates of the data symbols and all unknown nuisance parameters. The novelty of the algorithm lies in the use of a multi-resolution expansion of the received signal and the use of polynomial rooting in the complex plane in place of a line search over the signal delay parameter. The multi-resolution structure of the algorithm is exploited to reduce sensitivity to impulsive noise via wavelet thresholding. Computer simulations of the single-user system show that the algorithm converges quickly, and comparison to theoretical lower bounds establishes that it achieves nearly optimal error performance.
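    The wavelet-thresholding step mentioned above is commonly realized as soft thresholding of the detail coefficients; the paper does not specify the exact rule, so the following is a minimal sketch of that standard choice rather than the authors' receiver.

```python
import math

def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t, zeroing anything with
    magnitude below t. Small coefficients (mostly noise) are removed,
    and isolated impulsive spikes that survive are attenuated."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

out = soft_threshold([3.0, -0.5, -4.0, 0.2], 1.0)
```

    Because impulsive noise concentrates in a few large, well-localized wavelet coefficients, shrinking them limits their influence on the subsequent coordinate ascent updates.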